Consider the PDE
\[ a_{11} u_{xx} + 2 a_{12} u_{xy} + a_{22} u_{yy} + a_{1} u_{x} + a_{2} u_{y} + a_{0} u = 0 \]
By a linear transformation of the independent variables, the equation can be reduced to one of three canonical forms (unless \(a_{11} = a_{12} = a_{22} = 0\)):
Elliptic case: if \(a^{2}_{12} < a_{11} a_{22}\), then
\[ u_{xx} + u_{yy} + \cdots = 0 \]
Hyperbolic case: if \(a^{2}_{12} > a_{11} a_{22}\), then
\[ u_{xx} - u_{yy} + \cdots = 0 \]
Parabolic case: if \(a^{2}_{12} = a_{11} a_{22}\), then
\[ u_{xx} + \cdots = 0 \]
Suppose that \(a_{11} = 1\) and \(a_{1} = a_{2} = a_{0} = 0\). The equation can then be written as
\[ (\partial_{x} + a_{12} \partial_{y})^{2} u + (a_{22} - a^{2}_{12}) \partial^{2}_{y} u = 0 \]
In the elliptic case, let \(b = (a_{22} - a^{2}_{12})^{1/2} > 0\). Introduce new variables \(\xi, \eta\) by
\[ x = \xi, y = a_{12} \xi + b \eta \]
Then
\[ \begin{align} \partial_{\xi} &= 1 \cdot \partial_{x} + a_{12} \partial_{y} \\ \partial_{\eta} &= 0 \cdot \partial_{x} + b \partial_{y} \\ \end{align} \]
So, the equation becomes
\[ (\partial^{2}_{\xi} + \partial^{2}_{\eta}) u = 0 \]
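The reduction can be checked symbolically. The sketch below (using sympy; the elliptic choice \(a_{12} = 1/2\), \(a_{22} = 2\) and the test function are my own illustrative assumptions) confirms that the original operator applied through the change of variables reduces to the Laplacian \(U_{\xi\xi} + U_{\eta\eta}\).

```python
import sympy as sp

# Elliptic example: a12^2 = 1/4 < a22 = 2 (values chosen for illustration)
x, y = sp.symbols('x y', real=True)
a12, a22 = sp.Rational(1, 2), sp.Integer(2)
b = sp.sqrt(a22 - a12**2)

# Inverse of the transformation x = xi, y = a12*xi + b*eta
xi, eta = x, (y - a12*x)/b

# An arbitrary smooth test function U(xi, eta)
U = xi**2 + eta**3
canonical = 2 + 6*eta              # U_xixi + U_etaeta for this particular U

# Original operator u_xx + 2 a12 u_xy + a22 u_yy applied to U(xi(x,y), eta(x,y))
lhs = sp.diff(U, x, 2) + 2*a12*sp.diff(U, x, y) + a22*sp.diff(U, y, 2)

assert sp.simplify(lhs - canonical) == 0   # reduces to U_xixi + U_etaeta
```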
The other cases have a similar procedure.
The case of PDE with n variables will be added.
Consider the wave equation
\[ u_{tt} = c^{2} u_{xx} \]
Its general solution is
\[ u(x, t) = f(x+ct) + g(x-ct) \]
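This form can be verified symbolically; the sketch below (sympy, with arbitrary functions \(f, g\)) checks that any such superposition satisfies the wave equation identically.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
f, g = sp.Function('f'), sp.Function('g')

# General solution: superposition of a left- and a right-moving wave
u = f(x + c*t) + g(x - c*t)

# Residual of the wave equation u_tt - c^2 u_xx; should vanish identically
residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
assert sp.simplify(residual) == 0
```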
To derive this, factor the wave operator, \(\partial^{2}_{t} - c^{2} \partial^{2}_{x} = (\partial_{t} - c \partial_{x})(\partial_{t} + c \partial_{x})\), and set
\[ \begin{align} u_{t} + c u_{x} &= v \\ v_{t} - c v_{x} &= 0 \\ \end{align} \]
of which the latter has the solution \(v(x, t) = h(x + c t)\).
Then we solve \(u_{t} + c u_{x} = h(x + c t)\) by changing variables to \(\xi = x + ct\), \(\eta = x - ct\), so that \(\partial_{t} = c \partial_{\xi} - c \partial_{\eta}\) and \(\partial_{x} = \partial_{\xi} + \partial_{\eta}\). Then
\[ \begin{align} u_{t} + c u_{x} &= (c u_{\xi} - c u_{\eta}) + c (u_{\xi} + u_{\eta}) \\ &= 2 c u_{\xi} \\ \end{align} \]
Integrating \(2 c u_{\xi} = h(\xi)\), we get
\[ \begin{align} u(\xi, \eta) &= \frac{1}{2c} \int h(\xi) d \xi + g(\eta) \\ &= f(\xi) + g(\eta) \\ \end{align} \]
with \(f^{\prime}(\xi) = h(\xi) / 2c\). Then the solution is \(u(x, t) = f(x + ct) + g(x - ct)\).
Alternatively, introduce the characteristic coordinates
\[ \xi = x + ct, \quad \eta = x - ct \]
Substituting into the wave equation gives
\[ -4 c^{2} u_{\xi \eta} = 0 \]
Then \(u_{\xi \eta} = 0\) since \(c \neq 0\), which is written as
\[ \frac{\partial u}{\partial \eta} = v, \quad \frac{\partial v}{\partial \xi} = 0 \]
of which the latter leads to \(v = v(\eta)\). Substitute and get
\[ \frac{\partial u}{\partial \eta} = v(\eta) \]
Integrate and get
\[ \begin{align} u(\xi, \eta) &= \int v(\eta) d \eta + f(\xi) \\ &= g(\eta) + f(\xi) \\ \end{align} \]
\[ \begin{align} u_{tt} &= c^{2} u_{xx} \\ u(x, 0) &= \phi(x) \\ u_{t}(x, 0) &= \psi (x) \\ \end{align} \]
We find the solution by using the initial data \(\phi, \psi\) to determine the functions \(f, g\).
First set \(t=0\) in the general solution
\[ \begin{align} u(x, 0) &= f(x) + g(x) = \phi(x) \\ u_{t}(x, 0) &= c f^{\prime}(x) - c g^{\prime}(x) = \psi (x) \\ \end{align} \]
Then solve the above two equations for \(f, g\).
\[ \begin{align} \phi^{\prime} &= f^{\prime} + g^{\prime} \\ \frac{1}{c} \psi &= f^{\prime} - g^{\prime} \\ \Rightarrow & \\ f^{\prime} &= \frac{1}{2} (\phi^{\prime} + \frac{\psi}{c}) \\ g^{\prime} &= \frac{1}{2} (\phi^{\prime} - \frac{\psi}{c}) \\ \end{align} \]
Integrating, we get
\[ \begin{align} f(s) &= \frac{1}{2} \phi(s) + \frac{1}{2c} \int^{s}_{0} \psi(r) dr + A \\ g(s) &= \frac{1}{2} \phi(s) - \frac{1}{2c} \int^{s}_{0} \psi(r) dr + B \\ \end{align} \]
where \(A + B = 0\), since \(\phi = f + g\). Now, we get the solution
\[ u(x, t) = \frac{1}{2} \big [ \phi (x + ct) + \phi (x - ct) \big ] + \frac{1}{2c} \int^{x+ct}_{x-ct} \psi (s) ds \]
which is d’Alembert’s formula.
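The formula is straightforward to evaluate numerically. A minimal sketch (plain Python; the Gaussian initial position, zero initial velocity, and the midpoint quadrature are my own illustrative choices): at \(t = 0\) it reproduces \(\phi\), and for \(t > 0\) it satisfies the wave equation up to finite-difference error.

```python
import math

def dalembert(phi, psi, c, x, t, n=2000):
    """Evaluate d'Alembert's formula: average of the shifted initial
    positions plus the integral of psi over [x - ct, x + ct] (midpoint rule)."""
    a, b = x - c*t, x + c*t
    h = (b - a) / n
    integral = sum(psi(a + (i + 0.5)*h) for i in range(n)) * h
    return 0.5*(phi(b) + phi(a)) + integral/(2*c)

phi = lambda s: math.exp(-s*s)     # illustrative initial position
psi = lambda s: 0.0                # illustrative initial velocity
c = 2.0

# At t = 0 the formula reduces to the initial position
assert abs(dalembert(phi, psi, c, 0.7, 0.0) - phi(0.7)) < 1e-12

# Finite-difference check of u_tt = c^2 u_xx at an interior point
e = 1e-3
u = lambda x, t: dalembert(phi, psi, c, x, t)
utt = (u(0.3, 1 + e) - 2*u(0.3, 1) + u(0.3, 1 - e)) / e**2
uxx = (u(0.3 + e, 1) - 2*u(0.3, 1) + u(0.3 - e, 1)) / e**2
assert abs(utt - c*c*uxx) < 1e-3
```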
The effect of an initial position \(\phi (x)\) is a pair of waves traveling in either direction at speed \(c\) and at half the original amplitude. The effect of an initial velocity \(\psi\) is a wave spreading out at speed \(\leq c\) in both directions. The assertion that the speed \(\leq c\) is called the principle of causality.
The domain of influence of a point \(x_{0}\) on the initial line is the sector \(|x - x_{0}| \leq ct\): the initial data at \(x_{0}\) can affect the solution only there.
The domain of dependence of a point \((x, t)\) is the interval \([x - ct, x + ct]\) on the initial line: by d’Alembert’s formula, the value \(u(x, t)\) depends only on the data there.
Imagine an infinite string with constants \(\rho, T\). Then
\[ \begin{align} \rho u_{tt} &= T u_{xx}, \quad -\infty < x < \infty \\ \phi (x) &= \psi (x) = 0, \quad |x| > R \\ \end{align} \]
Define
\[ \begin{align} KE &= \frac{1}{2} \rho \int u^{2}_{t} dx \\ PE &= \frac{1}{2} T \int u^{2}_{x} dx \\ E &= KE + PE \\ \end{align} \]
First
\[ \begin{align} \frac{d KE}{d t} &= \rho \int u_{t} u_{tt} dx \\ &= T \int u_{t} u_{xx} dx \\ &= \Big[ T u_{t} u_{x} \Big]^{x = \infty}_{x = -\infty} - T \int u_{t x} u_{x} dx \\ \end{align} \]
The term \(T u_{t} u_{x}\) is evaluated at \(x = \pm \infty\), where \(u\) vanishes (the disturbance travels at finite speed and the data vanish for \(|x| > R\)), so it drops out.
\[ \begin{align} \frac{d KE}{d t} &= - T \int u_{t x} u_{x} dx \\ &= - \frac{d}{d t} \frac{1}{2} T \int u^{2}_{x} dx \\ &= - \frac{d PE}{d t} \\ \frac{d E}{d t} &= \frac{d KE}{d t} + \frac{d PE}{d t} = 0 \\ \end{align} \]
Thus
\[ E = \frac{1}{2} \int^{\infty}_{-\infty} (\rho u^{2}_{t} + T u^{2}_{x}) dx \]
is constant independent of \(t\), which is the law of conservation of energy.
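Conservation of energy can also be observed in a direct simulation. The sketch below (plain Python; the leapfrog scheme, grid sizes, and the Gaussian initial shape are my own illustrative choices, not part of the text) integrates \(\rho u_{tt} = T u_{xx}\) and checks that the discrete energy stays essentially constant while the waves are away from the boundary.

```python
import math

rho, T = 1.0, 4.0
c = math.sqrt(T / rho)
W, n = 20.0, 800                   # domain [-W, W]; the data vanish near the ends
dx = 2*W / n
dt = 0.5 * dx / c                  # CFL-stable time step
xs = [-W + i*dx for i in range(n + 1)]

u_prev = [math.exp(-x*x) for x in xs]          # u(x, 0) = phi(x)
# First step uses u_t(x, 0) = 0: u(x, dt) ~ u(x, 0) + (dt^2/2) c^2 u_xx
u_curr = list(u_prev)
for i in range(1, n):
    u_curr[i] = u_prev[i] + 0.5*(c*dt/dx)**2*(u_prev[i+1] - 2*u_prev[i] + u_prev[i-1])

def energy(u_new, u_old):
    """Discrete KE + PE, with difference quotients in place of u_t and u_x."""
    ke = 0.5*rho*sum(((u_new[i] - u_old[i])/dt)**2 for i in range(n + 1))*dx
    pe = 0.5*T*sum(((u_new[i+1] - u_new[i])/dx)**2 for i in range(n))*dx
    return ke + pe

E0 = energy(u_curr, u_prev)
for _ in range(200):               # total time 200*dt, well before waves reach the ends
    u_next = list(u_curr)
    for i in range(1, n):
        u_next[i] = (2*u_curr[i] - u_prev[i]
                     + (c*dt/dx)**2*(u_curr[i+1] - 2*u_curr[i] + u_curr[i-1]))
    u_prev, u_curr = u_curr, u_next

assert abs(energy(u_curr, u_prev) - E0) / E0 < 0.02   # energy is conserved
```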
1D diffusion equation
\[ u_{t} = k u_{xx} \]
A solution \(u(x, t)\) of the diffusion equation on the rectangle \(0 \leq x \leq l\), \(0 \leq t \leq T\) attains its maximum value only on the bottom (\(t = 0\)) or the lateral sides (\(x = 0\) or \(x = l\)). The proof (usually given only for a weaker version) is omitted.
The uniqueness of Dirichlet problem for the diffusion equation
\[ \begin{align} u_{t} - k u_{xx} &= f(x, t), 0 < x < l, t > 0 \\ u(x, 0) &= \phi (x) \\ u(0, t) &= g(t) \\ u(l, t) &= h(t) \\ \end{align} \]
can be proved by the maximum principle.
Proof by maximum principle
Let \(w = u_{1} - u_{2}\) be the difference of two solutions. Then \(w_{t} - k w_{xx} = 0\), \(w(x, 0) = 0\), \(w(0, t) = 0\), \(w(l, t) = 0\). By the maximum principle, \(w(x, t) \leq 0\); by the minimum principle, \(w(x, t) \geq 0\). Therefore \(w(x, t) = 0\), so \(u_{1} = u_{2}\), i.e. there is a unique solution.
Proof by energy method
\[ 0 = (w_{t} - k w_{xx}) w = (\frac{1}{2} w^{2})_{t} + (- k w_{x} w)_{x} + k w^{2}_{x} \]
Integrate over the interval \(0 < x < l\),
\[ \begin{align} 0 &= \int^{l}_{0} (\frac{1}{2} w^{2})_{t} dx - \big [ k w_{x} w ]^{l}_{0} + k \int^{l}_{0} w^{2}_{x} dx \\ \Rightarrow & \frac{d}{d t} \int^{l}_{0} \frac{1}{2} \big [ w(x, t) \big ]^{2} dx = -k \int^{l}_{0} \big [ w_{x} (x, t) \big ]^{2} dx \leq 0 \\ \end{align} \]
Therefore,
\[ \begin{align} & \int^{l}_{0} \big [ w(x, t) \big ]^{2} dx \leq \int^{l}_{0} \big [ w(x, 0) \big ]^{2} dx = 0, \quad t \geq 0 \\ \Rightarrow \; & w \equiv 0 \Rightarrow u_{1} = u_{2}, \quad t \geq 0 \\ \end{align} \]
The energy method in case \(h = g = f = 0\), leads to the following form of stability.
\[ \int^{l}_{0} \big [ u_{1} (x, t) - u_{2} (x, t) \big ]^{2} dx \leq \int^{l}_{0} \big [ \phi_{1} (x) - \phi_{2} (x) \big ]^{2} dx \]
Proof by maximum principle
\[ \max_{0 \leq x \leq l} | u_{1} (x, t) - u_{2} (x, t) | \leq \max_{0 \leq x \leq l} | \phi_{1} (x) - \phi_{2} (x) |, \quad t > 0 \]
Solve
\[ \begin{align} u_{t} &= k u_{xx}, -\infty < x < \infty, 0 < t < \infty \\ u(x, 0) &= \phi (x) \\ \end{align} \]
Our method is to solve it for a particular \(\phi (x)\) and then build the general solution from this particular one. The construction rests on five basic invariance properties of the diffusion equation.
The particular solution we look for is \(Q(x, t)\), which satisfies the special initial condition,
\[ \begin{align} Q(x, 0) &= 1, x > 0 \\ Q(x, 0) &= 0, x < 0 \\ \end{align} \]
which does not change under dilation.
Find \(Q(x, t)\) in three steps.
Expect \(Q\) to be the following special form,
\[ Q(x, t) = g(p), p = \frac{x}{\sqrt{4 k t}} \]
due to dilation.
Substitute \(Q\),
\[ \begin{align} Q_{t} &= \frac{d g}{d p} \frac{\partial p}{\partial t} = - \frac{1}{2 t} \frac{x}{\sqrt{4 k t}} g^{\prime} (p) \\ Q_{xx} &= \frac{d}{d p} \big ( \frac{d g}{d p} \frac{\partial p}{\partial x} \big ) \frac{\partial p}{\partial x} = \frac{1}{4kt} g^{\prime \prime} (p) \\ 0 &= Q_{t} - k Q_{xx} = \frac{1}{t} \Big [ - \frac{1}{2} p g^{\prime} (p) - \frac{1}{4} g^{\prime \prime} (p) \Big ] \\ \end{align} \]
Thus
\[ \begin{align} & g^{\prime \prime} + 2 p g^{\prime} = 0 \\ \Rightarrow & Q(x, t) = g(p) = C_{1} \int^{p}_{0} e^{-q^{2}} d q + C_{2} \\ \end{align} \]
Applying the special initial condition, we get the specific form of \(Q\),
\[ Q(x, t) = \frac{1}{\sqrt{\pi}} \int^{x/\sqrt{4kt}}_{0} e^{-p^{2}} dp + \frac{1}{2}, t > 0 \]
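In terms of the error function \(\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int^{z}_{0} e^{-q^{2}} dq\), this is \(Q = \frac{1}{2} + \frac{1}{2} \mathrm{erf}\big(x/\sqrt{4kt}\big)\). A short sketch (plain Python; \(k = 1\) is an illustrative choice) checks that \(Q\) recovers the step initial data as \(t \to 0^{+}\).

```python
import math

def Q(x, t, k=1.0):
    """Q(x, t) = 1/2 + (1/sqrt(pi)) * int_0^{x/sqrt(4kt)} e^{-p^2} dp,
    expressed via the error function."""
    return 0.5 + 0.5*math.erf(x / math.sqrt(4*k*t))

# As t -> 0+, Q tends to 1 for x > 0 and to 0 for x < 0 (the step data)
assert abs(Q( 1.0, 1e-8) - 1.0) < 1e-9
assert abs(Q(-1.0, 1e-8) - 0.0) < 1e-9
# On the line x = 0 the dilation-invariant solution is identically 1/2
assert abs(Q(0.0, 5.0) - 0.5) < 1e-12
```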
Define \(S = \partial Q / \partial x\), which is also a solution. And
\[ u(x, t) = \int^{\infty}_{-\infty} S(x-y, t) \phi(y) dy, t > 0 \]
is also a solution, which in fact is the general solution.
However, the validity of the initial condition \(u(x, 0) = \phi(x)\) remains to be checked.
\[ \begin{align} u(x, t) &= \int^{\infty}_{-\infty} \frac{\partial Q}{\partial x} (x-y, t) \phi(y) dy \\ &= - \int^{\infty}_{-\infty} \frac{\partial }{\partial y} \big [ Q(x-y, t) \big ] \phi(y) dy \\ &= \int^{\infty}_{-\infty} Q(x-y, t) \phi^{\prime} (y) dy - \Big [ Q(x-y, t) \phi(y) \Big ]^{y = \infty}_{y = -\infty} \\ \end{align} \]
Assume \(\phi(y) = 0\) for \(|y|\) large, and recall the special initial condition \(Q(x, 0) = 1\) for \(x > 0\), \(Q(x, 0) = 0\) for \(x < 0\). We have
\[ \begin{align} u(x, 0) &= \int^{\infty}_{-\infty} Q(x-y, 0) \phi^{\prime} (y) dy \\ &= \int^{x}_{-\infty} \phi^{\prime} (y) dy = \big [ \phi \big ]^{x}_{-\infty} = \phi(x) \\ \end{align} \]
Here, we confirm the general solution is
\[ \begin{align} u(x, t) &= \int^{\infty}_{-\infty} S(x-y, t) \phi(y) dy, t > 0 \\ S &= \frac{\partial Q}{\partial x} = \frac{1}{2 \sqrt{\pi k t}} e^{-x^{2}/4kt} \\ \end{align} \]
\(S(x, t)\) is known as the source function, Green’s function, fundamental solution, gaussian, or propagator of the diffusion, or simply the diffusion kernel.
The graph of the source function is a Gaussian curve that spreads out and flattens as \(t\) increases.
The area under its graph is
\[ \int^{\infty}_{-\infty} S(x, t) dx = \frac{1}{\sqrt{\pi}} \int^{\infty}_{-\infty} e^{-q^{2}} dq = 1 \]
We can regard the source function as the weighting function. Based on it, the value of the solution \(u(x, t)\) is a kind of weighted average of the initial values around the point \(x\).
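This weighted-average picture can be checked numerically. The sketch below (plain Python; truncating the integral to \([x - 30, x + 30]\) and the test data are my own illustrative choices) verifies that the kernel has unit mass and that for \(\phi = \cos\) the convolution reproduces the known exact solution \(u = e^{-kt} \cos x\).

```python
import math

def S(x, t, k=1.0):
    """Diffusion kernel S(x, t) = exp(-x^2 / 4kt) / (2 sqrt(pi k t))."""
    return math.exp(-x*x / (4*k*t)) / (2*math.sqrt(math.pi*k*t))

def solve(x, t, phi, k=1.0, W=30.0, n=6000):
    """u(x, t) = int S(x - y, t) phi(y) dy, midpoint rule on [x - W, x + W]."""
    h = 2*W / n
    return sum(S(x - y, t, k) * phi(y)
               for y in (x - W + (i + 0.5)*h for i in range(n))) * h

# The kernel has unit mass: a constant initial state stays constant
assert abs(solve(0.3, 1.0, lambda y: 1.0) - 1.0) < 1e-6

# For phi = cos the exact solution is u = e^{-kt} cos(x)
x, t = 0.5, 1.0
assert abs(solve(x, t, math.cos) - math.exp(-t)*math.cos(x)) < 1e-6
```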
Additional note on derivatives of integrals whose limit of integration is a function of the variable of differentiation.
Find
\[ \frac{d}{dx} \int^{x^{2}}_{1} \tan(t^{3}) dt \]
To find this derivative, first write the function defined by the integral as a composition of two functions \(h(x), g(x)\), as follows,
\[ h(x) = \int^{x}_{1} \tan(t^{3}) dt, \quad g(x) = x^{2} \]
Since
\[ h(g(x)) = \int^{g(x)}_{1} \tan(t^{3}) dt = \int^{x^{2}}_{1} \tan(t^{3}) dt \]
The derivative of a composition of two functions is found using the chain rule,
\[ \frac{d}{dx} h(g(x)) = h^{\prime}(g(x)) g^{\prime}(x) \]
The derivative of \(h(x)\) uses the fundamental theorem of calculus, while the derivative of \(g(x)\) is easy,
\[ h^{\prime} (x) = \frac{d}{dx} \int^{x}_{1} \tan(t^{3}) dt = \tan(x^{3}), \quad g^{\prime}(x) = 2x \]
Therefore,
\[ \begin{align} \frac{d}{dx} \int^{x^{2}}_{1} \tan(t^{3}) dt &= \frac{d}{dx} h(g(x)) \\ &= h^{\prime} (g(x)) g^{\prime}(x) \\ &= \tan((x^{2})^{3}) \cdot 2x \\ &= 2x \tan(x^{6}) \\ \end{align} \]
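The computation can be confirmed symbolically; the sketch below uses sympy's unevaluated `Integral`, which applies exactly this combination of the fundamental theorem of calculus and the chain rule when differentiated.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# The integral with a variable upper limit, left unevaluated
I = sp.Integral(sp.tan(t**3), (t, 1, x**2))

# Differentiating applies the fundamental theorem of calculus + chain rule
dI = I.diff(x)
assert sp.simplify(dI - 2*x*sp.tan(x**6)) == 0
```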
Omitted.
Consider the homogeneous Dirichlet conditions.
\[ \begin{align} u_{tt} - c^{2} u_{xx} &= 0, 0 < x < l \\ u(x, 0) &= \phi(x) \\ u_{t}(x, 0) &= \psi(x) \\ u(0, t) &= u(l, t) = 0 \\ \end{align} \]
Our method is to build up the general solution as a linear combination of special ones.
The special one is a separated solution of the form
\[ u(x, t) = X(x) T(t) \]
Plugging this form into the wave equation and dividing by \(-c^{2} X T\), we get
\[ -\frac{T^{\prime \prime}}{c^{2} T} = -\frac{X^{\prime \prime}}{X} = \lambda \]
Since the left side depends only on \(t\) and the right side only on \(x\), \(\lambda\) is a constant: \(\partial \lambda / \partial x = \partial \lambda / \partial t = 0\). It can further be shown that \(\lambda\) must be positive.
Let \(\lambda = \beta^{2}\), where \(\beta > 0\). Then we write
\[ \begin{align} X^{\prime \prime} + \beta^{2} X &= 0 \\ T^{\prime \prime} + c^{2} \beta^{2} T &= 0 \\ \end{align} \]
of which the solution is
\[ \begin{align} X(x) &= C \cos (\beta x) + D \sin (\beta x) \\ T(t) &= A \cos (\beta c t) + B \sin (\beta c t) \\ \end{align} \]
where \(A, B, C, D\) are constants.
Next is to impose the boundary conditions.
\[ \begin{align} 0 &= X(0) = C \\ 0 &= X(l) = D \sin (\beta l) \\ \end{align} \]
Since we are not interested in the trivial solution \(C = D = 0\), we need \(\sin(\beta l) = 0\), i.e. \(\beta l = n \pi\). Therefore,
\[ \begin{align} \lambda_{n} &= \Big(\frac{n \pi}{l}\Big)^{2} \\ X_{n}(x) &= \sin \Big(\frac{n \pi x}{l}\Big), \quad n = 1, 2, 3, \ldots \\ \end{align} \]
where the constant in front of the sine is omitted, since it can be absorbed into the coefficients later.
So, there are infinitely many separated solutions,
\[ u_{n}(x, t) = \Big ( A_{n} \cos \big(\frac{n \pi c t}{l}\big) + B_{n} \sin \big(\frac{n \pi c t}{l}\big) \Big ) \sin \frac{n \pi x}{l}, \quad n = 1, 2, 3, \ldots \]
of which a linear combination is also a solution.
The initial conditions require
\[ \begin{align} \phi(x) &= \sum_{n} A_{n} \sin \big(\frac{n \pi x}{l}\big) \\ \psi(x) &= \sum_{n} \frac{n \pi c}{l} B_{n} \sin \big(\frac{n \pi x}{l}\big) \\ \end{align} \]
which looks special, but holds for essentially any functions \(\phi, \psi\) once the sums are taken as infinite series.
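For a concrete \(\phi\) the coefficients can be computed from the usual orthogonality formula \(A_{n} = \frac{2}{l} \int^{l}_{0} \phi(x) \sin(n \pi x / l) dx\), and the partial sums converge to \(\phi\). A sketch (plain Python; the choice \(\phi(x) = x(l - x)\) and the truncation at \(N = 50\) are my own illustrative assumptions):

```python
import math

l = 1.0
phi = lambda x: x*(l - x)          # illustrative initial position, phi(0) = phi(l) = 0

def A(n, m=2000):
    """A_n = (2/l) * int_0^l phi(x) sin(n pi x / l) dx (midpoint rule)."""
    h = l / m
    return (2/l) * sum(phi((i + 0.5)*h) * math.sin(n*math.pi*(i + 0.5)*h/l)
                       for i in range(m)) * h

def partial_sum(x, N=50):
    """Truncated sine series for phi at the point x."""
    return sum(A(n) * math.sin(n*math.pi*x/l) for n in range(1, N + 1))

# The truncated sine series already matches phi closely
x0 = 0.3
assert abs(partial_sum(x0) - phi(x0)) < 1e-4
```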
The diffusion problem is similar.
The numbers \(\lambda_{n} = (n \pi / l)^{2}\) are called eigenvalues and the functions \(X_{n}(x) = \sin (n \pi x / l)\) are called eigenfunctions. The reason for this terminology is as follows: they satisfy
\[ - \frac{d^{2}}{d x^{2}} X = \lambda X, X(0) = X(l) = 0 \]
Let \(A\) denote the operator \(- d^{2} / d x^{2}\), acting on functions that satisfy the Dirichlet boundary conditions. The differential equation then has the form \(A X = \lambda X\). An eigenfunction is a solution \(X \neq 0\) of this equation, and an eigenvalue is a number \(\lambda\) for which such a solution exists. Recall the familiar case of an \(N \times N\) matrix \(A\): there are at most \(N\) eigenvalues. In the case of a differential operator, however, there are infinitely many eigenvalues; one might say we are dealing with infinite-dimensional linear algebra. In physics and engineering, the eigenfunctions are called normal modes, because they are the natural shapes of solutions that persist for all time.
Under Neumann boundary conditions, the problem becomes
\[ \begin{align} - X^{\prime \prime} &= \lambda X \\ X^{\prime}(0) &= X^{\prime}(l) = 0 \\ \end{align} \]
By a similar method, the solutions are
\[ \begin{align} \lambda_{n} &= \Big(\frac{n \pi}{l}\Big)^{2}, \quad n = 0, 1, 2, \ldots \\ X_{n}(x) &= \cos \Big(\frac{n \pi x}{l}\Big) \\ \end{align} \]
Note that \(n=0\) is included among them.
to be added.
Refer to Link.
Refer to Link.
Define the inner product of \(f(x), g(x)\), two real-valued continuous functions defined on an interval \(a \leq x \leq b\), to be the integral of their product.
\[ (f, g) \equiv \int^{b}_{a} f(x) g(x) dx \]
Call \(f(x), g(x)\) orthogonal if \((f, g) = 0\).
Note: every eigenfunction is orthogonal to every other eigenfunction in the case of Dirichlet, Neumann, periodic, or Robin conditions; this is the basis on which Fourier series rest. The proof is as follows.
\[ \begin{align} \lambda_{1} X_{1} X_{2} - \lambda_{2} X_{1} X_{2} &= - X^{\prime \prime}_{1} X_{2} + X_{1} X^{\prime \prime}_{2} = (- X^{\prime}_{1} X_{2} + X_{1} X^{\prime}_{2})^{\prime} \\ (\lambda_{1} - \lambda_{2}) \int^{b}_{a} X_{1} X_{2} dx &= \int^{b}_{a} - X^{\prime \prime}_{1} X_{2} + X_{1} X^{\prime \prime}_{2} dx = [- X^{\prime}_{1} X_{2} + X_{1} X^{\prime}_{2}]^{b}_{a} \\ \end{align} \]
of which the second equation is called Green’s second identity. Under each of the four kinds of boundary conditions, the right-hand side equals \(0\); thus eigenfunctions with distinct eigenvalues are orthogonal.
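For the Dirichlet eigenfunctions \(X_{n} = \sin(n \pi x / l)\) the orthogonality can be checked directly; the sketch below (plain Python; \(l = 2\) chosen arbitrarily) verifies \((X_{n}, X_{m}) = 0\) for \(n \neq m\) and \((X_{n}, X_{n}) = l/2\).

```python
import math

l = 2.0

def inner(n, m, steps=4000):
    """(X_n, X_m) = int_0^l sin(n pi x / l) sin(m pi x / l) dx (midpoint rule)."""
    h = l / steps
    return sum(math.sin(n*math.pi*(i + 0.5)*h/l) *
               math.sin(m*math.pi*(i + 0.5)*h/l) for i in range(steps)) * h

assert abs(inner(2, 5)) < 1e-9          # distinct eigenvalues: orthogonal
assert abs(inner(3, 3) - l/2) < 1e-9    # normalization: (X_n, X_n) = l/2
```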
Let us consider any pair of boundary conditions
\[ \begin{align} \alpha_{1} X(a) + \beta_{1} X(b) + \gamma_{1} X^{\prime}(a) + \delta_{1} X^{\prime}(b) &= 0 \\ \alpha_{2} X(a) + \beta_{2} X(b) + \gamma_{2} X^{\prime}(a) + \delta_{2} X^{\prime}(b) &= 0 \\ \end{align} \]
involving \(8\) real constants. Such a set of boundary conditions is called symmetric if, for all \(f, g\) satisfying them,
\[ \Big [ f^{\prime}(x) g(x) - f(x) g^{\prime}(x) \Big ]^{x=b}_{x=a} = 0 \]
All standard boundary conditions are symmetric.
Theorem 1 is to be added.
Note: if there are two eigenfunctions \(X_{1}(x), X_{2}(x)\) with the same eigenvalue \(\lambda_{1} = \lambda_{2}\), they do not have to be orthogonal. However, if not already orthogonal, they can be made so by the Gram–Schmidt orthogonalization procedure.
If functions are complex valued, define the inner product on \((a, b)\) as
\[ (f, g) = \int^{b}_{a} f(x) \overline{g(x)} dx \]
where the bar denotes the complex conjugate. If \((f, g) = 0\), they are called orthogonal.
The following is to be added.
\[ \nabla^{2} u = 0 \] A solution of the Laplace equation is called a harmonic function.
Poisson’s equation
\[ \nabla^{2} u = f \]
with \(f\) a given function.
Let \(V\) be a connected bounded open set. Let \(u\) be a harmonic function in \(V\) that is continuous on \(\bar{V} = V \cup \partial V\). Then the maximum and the minimum values of \(u\) are attained on \(\partial V\) and nowhere inside (unless \(u\) is constant). The idea of the maximum principle is as follows: if there were a maximum point inside \(V\), we would have \(\nabla^{2} u \leq 0\) there. At most maximum points \(\nabla^{2} u < 0\), contradicting Laplace’s equation. However, \(\nabla^{2} u = 0\) is possible at a maximum point, so a complete proof requires a little more work, which is omitted here.
The relevance of the second derivative at a maximum point: at an interior maximum, each pure second derivative satisfies \(u_{xx} \leq 0\), which is what the argument above uses.
Suppose there are two solutions,
\[ \begin{align} \nabla^{2} u = f, \quad \nabla^{2} v = f, &\quad \text{in } V \\ u = h, \quad v = h, &\quad \text{on } \partial V \\ \end{align} \]
Let \(w = u - v\), then
\[ \begin{align} \nabla^{2} w = 0, &\quad \text{in } V \\ w = 0, &\quad \text{on } \partial V \\ \end{align} \]
By the maximum principle, with \(x_{m}, x_{M} \in \partial V\) the minimum and maximum points of \(w\),
\[ 0 = w(x_{m}) \leq w(x) \leq w(x_{M}) = 0 \]
Therefore, \(w \equiv 0, u \equiv v\).
Proof is omitted. The operator \(\nabla^{2}\) is rotationally invariant (isotropic). In two dimensions, the special harmonic function that is itself rotationally invariant is \(u = C_{1} \ln r + C_{2}\).
Proof is omitted. In three dimensions, the corresponding rotationally invariant harmonic function is \(u = - C_{1} r^{-1} + C_{2}\).
Omitted.
Consider the Dirichlet problem for a circle,
\[ \begin{align} u_{xx} + u_{yy} &= 0, & x^{2} + y^{2} < a^{2} \\ u &= h(\theta), & x^{2} + y^{2} = a^{2} \\ \end{align} \]
Our method is to separate variables in polar coordinates, \(u = R(r) \Theta (\theta)\).
\[ \begin{align} 0 &= u_{xx} + u_{yy} \\ &= R^{\prime \prime} \Theta + \frac{1}{r} R^{\prime} \Theta + \frac{1}{r^{2}} R \Theta^{\prime \prime} \\ \end{align} \]
Multiplying by \(r^{2} / (R \Theta)\) and separating variables, we get
\[ \begin{align} \Theta^{\prime \prime} + \lambda \Theta &= 0 \\ r^{2} R^{\prime \prime} + r R^{\prime} - \lambda R &= 0 \\ \end{align} \]
which are two ODEs, each requiring boundary conditions.
For \(\Theta (\theta)\), naturally, we provide periodic BCs,
\[ \Theta (\theta + 2 \pi) = \Theta (\theta), -\infty < \theta < \infty \]
Thus
\[ \begin{align} \lambda &= n^{2} \\ \Theta (\theta) &= A \cos(n \theta) + B \sin (n \theta), \quad n = 1, 2, 3, \ldots \\ \Theta (\theta) &= C_{0}, \quad n = 0 \\ \end{align} \]
(to be added)
Therefore,
\[ u(r, \theta) = (a^{2} - r^{2}) \int^{2 \pi}_{0} \frac{h(\phi)}{a^{2} - 2 a r \cos (\theta - \phi) + r^{2}} \frac{d \phi}{2 \pi} \]
which is known as Poisson’s formula.
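The formula can be checked numerically against boundary data whose harmonic extension is known. A sketch (plain Python; \(a = 1\), \(h = \cos\), and the quadrature grid are my own illustrative choices): for \(h(\theta) = \cos \theta\) the harmonic function is \(u = (r/a) \cos \theta\), and at the center the formula returns the average of \(h\) (the mean value property).

```python
import math

def poisson(r, theta, h, a=1.0, n=2000):
    """Poisson's formula, with the phi-integral done by the midpoint rule
    (spectrally accurate for smooth periodic integrands)."""
    dphi = 2*math.pi / n
    s = sum(h(p) / (a*a - 2*a*r*math.cos(theta - p) + r*r)
            for p in ((i + 0.5)*dphi for i in range(n)))
    return (a*a - r*r) * s * dphi / (2*math.pi)

# h = cos extends harmonically to u = (r/a) cos(theta); here a = 1
r, th = 0.5, 1.2
assert abs(poisson(r, th, math.cos) - r*math.cos(th)) < 1e-9

# At the center, u(0) is the average of the boundary values
assert abs(poisson(0.0, 0.0, lambda p: 3.0 + math.sin(p)) - 3.0) < 1e-9
```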
\(v(x, t) = u(\sqrt{a} x, a t), v_{t} = a u_{t} = a k u_{xx} = k (a u_{xx}) = k v_{xx}\)↩︎
\(\int e^{-x^{2}} dx = \frac{1}{2} \sqrt{\pi} erf(x) + C, \int^{\infty}_{0} e^{-x^{2}} dx = \frac{\sqrt{\pi}}{2}\), \(erf(x)\) is the error function.↩︎
By an open set, we mean a set that includes none of its boundary points.↩︎
The transformation between Cartesian coordinate and polar coordinate can be achieved by derivatives of compound function.↩︎